Employing Two Question Answering Systems in TREC 2005
Authors
Abstract
In 2005, the TREC QA track had two separate tasks: the main task and the relationship task. To participate in TREC 2005 we employed two different QA systems: PowerAnswer-2 was used in the main task, whereas PALANTIR was used for the relationship questions. New in the main task this year is the use of events as targets in addition to the nominal concepts used last year. Event targets ranged from a nominal event such as “Preakness 1998” to a description of an event as in “Plane clips cable wires in Italian resort”. There were 17 event targets in total. Unlike nominal targets, which most often act as the topic of the subsequent questions, events provide a context for the questions. Therefore, targets representing events had questions that asked about participants in the event and about characteristics of the event, and that furthermore carried temporal constraints. Also, many questions referred to answers of previous questions. To complicate matters, several answers could be candidates for the anaphors used in follow-up questions, but salience mattered. This introduced new complexities for coreference resolution. Consider the following example:
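As a rough, hypothetical sketch of the salience issue described above (not the paper's own example and not the PowerAnswer-2 algorithm), one could imagine resolving an anaphor in a follow-up question against the answers to earlier questions about the same event target, preferring the most recent type-compatible candidate:

```python
# Hypothetical sketch of salience-based anaphor resolution for follow-up
# questions about an event target; illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    text: str          # answer string from a previous question
    entity_type: str   # e.g. "PERSON", "HORSE", "LOCATION"
    recency: int       # higher = answered more recently

def resolve_anaphor(anaphor_type: str, candidates: List[Candidate]) -> Optional[Candidate]:
    """Pick the most salient type-compatible antecedent for an anaphor.

    Salience is approximated here by recency alone; a real system would
    also weigh grammatical role, frequency, and discourse focus.
    """
    compatible = [c for c in candidates if c.entity_type == anaphor_type]
    if not compatible:
        return None
    return max(compatible, key=lambda c: c.recency)

# Target: "Preakness 1998"
# Q1: "Who won the race?"            -> "Real Quiet" (HORSE)
# Q2: "Who was the winning jockey?"  -> "Kent Desormeaux" (PERSON)
# Q3: "How old is he?"  -- "he" is type-compatible with PERSON; salience decides.
history = [
    Candidate("Real Quiet", "HORSE", recency=1),
    Candidate("Kent Desormeaux", "PERSON", recency=2),
]
print(resolve_anaphor("PERSON", history).text)  # Kent Desormeaux
```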
Similar Resources
JAVELIN I and II Systems at TREC 2005
The JAVELIN team at Carnegie Mellon University submitted three question-answering runs for the TREC 2005 evaluation. The JAVELIN I system was used to generate a single submission to the main track, and the JAVELIN II system was used to generate two submissions to the relationship track. In the sections that follow, we separately describe each system and the submission(s) it produced, and conclu...
DalTREC 2006 QA System Jellyfish: Regular Expressions Mark-and-Match Approach to Question Answering
We present a question-answering system, Jellyfish. Our approach is based on marking and matching steps that are implemented using the methodology of cascaded regular-expression rewriting. We present the system architecture and evaluate the system using the TREC 2004, 2005, and 2006 datasets. TREC 2004 was used as a training dataset, while TREC 2005 and TREC 2006 were used as testing datasets. The...
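As a hedged illustration of the mark-and-match idea (a minimal sketch, not the Jellyfish implementation; the patterns and tag names are invented), a marking pass can use regex rewriting to tag candidate answer spans, and a matching pass can then extract the tagged span that satisfies a question-derived pattern:

```python
# Minimal sketch of a regex "mark-and-match" step, loosely inspired by
# cascaded regular-expression rewriting; illustrative only.
import re
from typing import Optional

def mark_dates(text: str) -> str:
    """Marking pass: wrap four-digit years in <DATE>...</DATE> tags."""
    return re.sub(r"\b(1[89]\d{2}|20\d{2})\b", r"<DATE>\1</DATE>", text)

def match_answer(question_pattern: str, marked_text: str) -> Optional[str]:
    """Matching pass: extract the tagged span that satisfies the pattern."""
    m = re.search(question_pattern, marked_text)
    return m.group(1) if m else None

sentence = "The Preakness Stakes of 1998 was won by Real Quiet."
marked = mark_dates(sentence)
# Question: "When did Real Quiet win the Preakness?" -> look for a DATE tag.
print(match_answer(r"<DATE>(\d{4})</DATE>", marked))  # 1998
```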
متن کاملQuestion Answering with QED at TREC 2005
This report describes the system developed by the University of Edinburgh and the University of Sydney for the TREC-2005 question answering evaluation exercise. The backbone of our question-answering platform is QED, a linguistically-principled QA system. We experimented with external sources of knowledge, such as Google and Wikipedia, to enhance the performance of QED, especially for reranking...
QACTIS-based Question Answering at TREC 2005
The QACTIS system is being developed for the eventual purpose of providing a user the capability of multilingual question-answering from multimedia. QACTIS was tested at TREC-2005 as a means of identifying its successes and limitations in answering questions specifically from English newswire text as it moves in the direction of multilingual, multimedia question answering. In this paper, we pr...
Learning to Classify Questions
An automatic classifier of questions in terms of their expected answer type is a desirable component of many question-answering systems. It eliminates the manual labour and the lack of portability of classifying them semi-automatically. We explore the performance of several learning algorithms (SVM, neural networks, boosting) based on two purely lexical feature sets on a dataset of almost 2000 ...
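A minimal sketch of such a classifier, assuming scikit-learn, an SVM, and purely lexical bag-of-words features; the feature sets, data, and settings used in the paper are not reproduced here, and the toy questions below are invented:

```python
# Hedged sketch: classify questions by expected answer type with a linear
# SVM over bag-of-words features (scikit-learn). Toy training data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

questions = [
    "Who won the Preakness in 1998?",
    "When did the plane clip the cable wires?",
    "Where is the Italian resort located?",
    "Who was the jockey of the winning horse?",
]
answer_types = ["PERSON", "DATE", "LOCATION", "PERSON"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, answer_types)

print(clf.predict(["Who rode the horse?"]))  # expected: ['PERSON']
```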
Publication date: 2005